Learning directions of objects specified by vision, spatial audition, or auditory spatial language.
Authors
Abstract
The modality by which object azimuths (directions) are presented affects learning of multiple locations. In Experiment 1, participants learned sets of three and five object azimuths specified by a visual virtual environment, spatial audition (3D sound), or auditory spatial language. Five azimuths were learned faster when specified by spatial modalities (vision, audition) than by language. Experiment 2 equated the modalities for proprioceptive cues and eliminated spatial cues unique to vision (optic flow) and audition (differential binaural signals). There remained a learning disadvantage for spatial language. We attribute this result to the cost of indirect processing from words to spatial representations.
Similar resources
Selective deficits in human audition: evidence from lesion studies
The human auditory cortex is the gateway to the most powerful and complex communication systems and yet relatively little is known about its functional organization as compared to the visual system. Several lines of evidence, predominantly from recent studies, indicate that sound recognition and sound localization are processed in two at least partially independent networks. Evidence from human...
Intermodal spatial attention differs between vision and audition: an event-related potential analysis.
Subjects were required to attend to a combination of stimulus modality (vision or audition) and location (left or right). Intermodal attention was measured by comparing event-related potentials (ERPs) to visual and auditory stimuli when the modality was relevant or irrelevant, while intramodal (spatial) attention was measured by comparing ERPs to visual and auditory stimuli presented at relevan...
Spatial working memory for locations specified by vision and audition: testing the amodality hypothesis.
Spatial working memory can maintain representations from vision, hearing, and touch, representations referred to here as spatial images. The present experiment addressed whether spatial images from vision and hearing that are simultaneously present within working memory retain modality-specific tags or are amodal. Observers were presented with short sequences of targets varying in angular direc...
Cross-modal interactions between audition, touch, and vision in endogenous spatial attention: ERP evidence on preparatory states and sensory modulations.
Recent behavioral and event-related brain potential (ERP) studies have revealed cross-modal interactions in endogenous spatial attention between vision and audition, plus vision and touch. The present ERP study investigated whether these interactions reflect supramodal attentional control mechanisms, and whether similar cross-modal interactions also exist between audition and touch. Participant...
Journal:
Learning & Memory
Volume 9, Issue 6
Pages: -
Published: 2002